Rise and fall of debit-credit bookkeeping in China: History and analysis
This paper presents a century-long history of the debit-credit method of double-entry bookkeeping in China. Since its introduction to China at the turn of the century, debit-credit bookkeeping went through many years of turbulence until 1992, when the Chinese government officially designated it as the standard bookkeeping method. Rather than taking a narrow technical perspective, this paper examines the historical events that shaped bookkeeping methods in China from a broad socioeconomic and political viewpoint. The story of debit-credit bookkeeping in China exemplifies how accounting is intertwined with the political and socioeconomic environment in which it exists.
i2MapReduce: Incremental MapReduce for Mining Evolving Big Data
As new data and updates constantly arrive, the results of data mining
applications become stale over time. Incremental processing is a
promising approach to refreshing mining results: it utilizes previously saved
states to avoid the expense of re-computation from scratch.
In this paper, we propose i2MapReduce, a novel incremental processing
extension to MapReduce, the most widely used framework for mining big data.
Compared with Incoop, the state-of-the-art incremental MapReduce work, i2MapReduce (i) performs
key-value pair level incremental processing rather than task level
re-computation, (ii) supports not only one-step computation but also more
sophisticated iterative computation, which is widely used in data mining
applications, and (iii) incorporates a set of novel techniques to reduce I/O
overhead for accessing preserved fine-grain computation states. We evaluate
i2MapReduce using a one-step algorithm and three iterative algorithms with
diverse computation characteristics. Experimental results on Amazon EC2 show
significant performance improvements of i2MapReduce compared to both plain and
iterative MapReduce performing re-computation.
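To make the key-value pair level incremental processing idea concrete, here is a minimal, hypothetical sketch (not the actual i2MapReduce implementation, which runs on Hadoop): an incremental word count that preserves each input's map output and updates only the affected key-value pairs when an input changes, instead of recomputing every pair from scratch.

```python
# Toy sketch of key-value pair level incremental processing (illustrative
# only, not the i2MapReduce system): a word count whose preserved state is
# adjusted per key when a document changes.
from collections import Counter

def map_words(text):
    """Map phase: emit (word, count) pairs for one document."""
    return Counter(text.split())

def incremental_update(state, contributions, doc_id, new_text):
    """Apply a document change by touching only the affected key-value pairs.

    state:         preserved reduce result, word -> total count
    contributions: preserved map output per document, doc_id -> Counter
    """
    old = contributions.get(doc_id, Counter())
    new = map_words(new_text) if new_text is not None else Counter()
    # Subtract the old contribution and add the new one, key by key;
    # keys untouched by this document are never re-processed.
    for word in set(old) | set(new):
        state[word] = state.get(word, 0) + new[word] - old[word]
        if state[word] == 0:
            del state[word]
    contributions[doc_id] = new
    return state

# Initial batch computation over two documents.
contribs, totals = {}, {}
for doc_id, text in {1: "a b a", 2: "b c"}.items():
    incremental_update(totals, contribs, doc_id, text)
# totals is now {"a": 2, "b": 2, "c": 1}

# Document 2 changes; only its key-value pairs are reprocessed.
incremental_update(totals, contribs, 2, "c c")
# totals is now {"a": 2, "b": 1, "c": 2}
```

The preserved `contributions` map plays the role of the fine-grain computation state the abstract mentions; the paper's contribution includes I/O techniques for storing such state efficiently, which this in-memory toy glosses over.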
A nonparametric learning framework for nonlinear robust output regulation
This paper proposes a nonparametric learning solution framework for a generic
internal model design of nonlinear robust output regulation. The global robust
output regulation problem for a class of nonlinear systems with output feedback
subject to a nonlinear exosystem can be tackled by constructing a linear
generic internal model, provided that a continuous nonlinear mapping exists. An
explicit continuous nonlinear mapping was constructed recently in [1] under the
assumption that the steady-state generator is linear in the exogenous signal.
We relax this assumption to the weaker requirement that the
steady-state generator is polynomial in the exogenous signal. A nonparametric
learning framework is proposed to solve a linear time-varying equation,
ensuring that the continuous nonlinear mapping always exists. With the proposed
framework, the nonlinear robust output regulation problem can be converted into
a robust non-adaptive stabilization problem for the augmented system with
integral Input-to-State Stable (iISS) inverse dynamics. Moreover, a dynamic
gain approach can adaptively raise the gain to a sufficiently large constant to
achieve stabilization without requiring any a priori knowledge of the
uncertainties appearing in the dynamics of the exosystem and the system. We
further apply the nonparametric learning framework to globally reconstruct and
estimate multiple sinusoidal signals with unknown frequencies without using
adaptive techniques. An explicit nonlinear mapping can directly provide the
estimated parameters, which will exponentially converge to the unknown
frequencies. As a result, a feedforward control design is proposed to solve the
output regulation problem using our nonparametric learning framework.
Comment: 15 pages; nonlinear control; iISS stability; output regulation;
parameter estimation; non-adaptive control
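For readers less familiar with output regulation, the standard problem setup can be sketched as follows; the symbols here are generic textbook notation, not necessarily the paper's.

```latex
% Generic output regulation setup (notation is illustrative, not the paper's):
% plant with state x, control u, and exogenous signal v from an exosystem.
\begin{align}
  \dot{x} &= f(x, u, v) && \text{(uncertain plant)} \\
  \dot{v} &= s(v)       && \text{(exosystem, e.g.\ unknown sinusoids)} \\
  e       &= h(x, v)    && \text{(regulated tracking error)}
\end{align}
% Goal: an output-feedback controller keeping all trajectories bounded with
% e(t) -> 0 as t -> infinity, despite uncertainties in plant and exosystem.
% The internal-model approach embeds a copy of the steady-state generator
% (assumed polynomial in v in this paper) inside the controller.
```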
Search to Fine-tune Pre-trained Graph Neural Networks for Graph-level Tasks
Recently, graph neural networks (GNNs) have shown unprecedented success
in many graph-related tasks. However, GNNs face the same label scarcity issue
as other neural networks. Thus, recent efforts try to pre-train GNNs on a
large-scale unlabeled graph and adapt the knowledge from the unlabeled graph to
the target downstream task. The adaptation is generally achieved by fine-tuning
the pre-trained GNNs with a limited amount of labeled data. Despite the
importance of fine-tuning, current GNN pre-training works often ignore
designing a good fine-tuning strategy to better leverage transferred knowledge
and improve the performance on downstream tasks. Only a few works have begun to
investigate better fine-tuning strategies for pre-trained GNNs, but their
designs either rest on strong assumptions or overlook the data-specific
characteristics of the various downstream datasets. Therefore, in this paper
we aim to design a better fine-tuning strategy for pre-trained GNNs that
improves model performance.
Given a pre-trained GNN, we propose to search to fine-tune pre-trained graph
neural networks for graph-level tasks (S2PGNN), which adaptively designs a
suitable fine-tuning framework for the given labeled data on the downstream
task. To ensure the improvement brought by searching for a fine-tuning strategy, we
carefully summarize a search space of fine-tuning frameworks that is
suitable for GNNs. The empirical studies show that S2PGNN can be implemented on
top of 10 well-known pre-trained GNNs and consistently improve their
performance. Besides, S2PGNN achieves better performance than existing
fine-tuning strategies within and outside the GNN area. Our code is publicly
available at \url{https://anonymous.4open.science/r/code_icde2024-A9CB/}.
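The core loop of searching over fine-tuning strategies can be illustrated with a minimal, hypothetical sketch. Everything below is invented for illustration (the search-space dimensions, the `validation_score` stand-in, and the exhaustive search); S2PGNN's actual search space and search algorithm are defined in the paper.

```python
# Toy sketch of searching a small fine-tuning search space (hypothetical,
# not the actual S2PGNN algorithm): enumerate candidate strategies and keep
# the one with the best downstream validation score.
import itertools

# Hand-made search space: how many pre-trained layers to freeze, the
# learning rate, and the graph-level readout function.
SEARCH_SPACE = {
    "freeze_layers": [0, 1, 2],
    "learning_rate": [1e-2, 1e-3],
    "readout": ["mean", "sum", "max"],
}

def validation_score(strategy):
    """Stand-in for fine-tuning a pre-trained GNN and evaluating it on the
    downstream task; a fixed toy function so the sketch is runnable."""
    base = {"mean": 0.80, "sum": 0.78, "max": 0.76}[strategy["readout"]]
    penalty = 0.02 * strategy["freeze_layers"]   # freezing hurts in this toy
    bonus = 0.01 if strategy["learning_rate"] == 1e-3 else 0.0
    return base - penalty + bonus

def search_finetune_strategy(space, evaluate):
    """Exhaustively score every strategy in the space; return the best one."""
    keys = list(space)
    best, best_score = None, float("-inf")
    for values in itertools.product(*(space[k] for k in keys)):
        strategy = dict(zip(keys, values))
        score = evaluate(strategy)
        if score > best_score:
            best, best_score = strategy, score
    return best, best_score

best, score = search_finetune_strategy(SEARCH_SPACE, validation_score)
# best == {"freeze_layers": 0, "learning_rate": 0.001, "readout": "mean"}
```

A real system would replace both the exhaustive enumeration (practical search spaces are far too large) and the toy scorer with actual fine-tuning runs, but the select-by-validation-score skeleton is the same.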